This report describes the results of the first priming mindfulness study, as well as of the two online pilot studies from the Varela project. It was initially made using Dominique Makowski’s Supplementary Materials template.
This is an exploratory (not confirmatory) study. The result of this exploration will be used to conduct a second, confirmatory, preregistered study.
library(rempsyc)
library(dplyr)
library(interactions)
library(performance)
library(see)
library(patchwork)
library(ggplot2)
library(rstatix)
library(DescTools)
library(report)
library(bestNormalize)
summary(report(sessionInfo()))

The analysis was done using the R Statistical language (v4.2.0; R Core Team, 2022) on Windows 10 x64, using the packages bestNormalize (v1.8.2), DescTools (v0.99.44), ggplot2 (v3.3.5), interactions (v1.1.5), performance (v0.9.0), see (v0.7.0), report (v0.5.1), dplyr (v1.0.8), patchwork (v1.1.1), rempsyc (v0.0.4.0) and rstatix (v0.7.0).
data <- read.csv("data/fulldataset.csv")
# Dummy-code group variable
data <- data %>%
mutate(condition_dum = ifelse(condition == "Mindfulness", 1, 0),
condition = as.factor(condition))
cat(report_participants(data))

272 participants ()
report(data$country)

x: 8 entries, such as USA (93.75%); Canada (1.47%); India (1.47%) and 5 others (4 missing)
# Allocation ratio
report(data$condition)

x: 2 levels, namely Control (n = 139, 51.10%) and Mindfulness (n = 133, 48.90%)
In this stage, we define a list of our relevant variables and standardize them according to the Median Absolute Deviation (MAD), which is more robust to extreme observations than standardization around the mean.
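As a quick illustration of why the MAD is preferred here, below is a minimal sketch of MAD-based standardization (a hypothetical helper for illustration only, not the function used in the analysis): values are centered on the median and scaled by the median absolute deviation, so a single extreme observation barely shifts the result.

```r
# Hypothetical sketch of MAD-based standardization: center on the median,
# scale by the MAD (base R's mad() applies the 1.4826 consistency constant)
scale_mad_sketch <- function(x) {
  (x - median(x, na.rm = TRUE)) / mad(x, na.rm = TRUE)
}

x <- c(1, 2, 3, 4, 100)  # one extreme observation
scale_mad_sketch(x)      # the median of the result is exactly 0
```

With mean/SD standardization, the single value of 100 would inflate the scale and pull the center upward; with the median and MAD, the other four values are essentially unaffected.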
# Make list of DVs
col.list <- c("blastintensity", "blastduration", "blastintensity.duration",
"blastintensity.first", "blastduration.first",
"blastintensity.duration.first", "KIMS", "BSCS", "BAQ",
"SOPT", "IAT")
# Create new variable blastintensity.duration
data$blastintensity.duration <- (data$blastintensity * data$blastduration)
data$blastintensity.duration.first <- (data$blastintensity.first *
data$blastduration.first)
# Divide by 2? Do some other sort of transformation given I multiplied two scores?
# Should I multiply them after standardization or before?

Why combine the intensity and duration scores? Should we? For a discussion, see:
Elson, M., Mohseni, M. R., Breuer, J., Scharkow, M., & Quandt, T. (2014). Press CRTT to measure aggressive behavior: the unstandardized use of the competitive reaction time task in aggression research. Psychological assessment, 26(2), 419. https://doi.org/10.1037/a0035569
Why use the first sound blast only instead of the average of all trials? Should we?
According to some, the Taylor Aggression Paradigm is not a measure of aggression per se, but of reactive aggression, because participants react to the other “participant’s” aggression. They suggest that for a pure measure of aggression, it is recommended to use only the first sound blast delivered by the participant, before they receive one themselves. At this stage, we attempt the analyses with all these different measures of aggression for exploratory purposes. See the earlier reference to Elson et al. (2014).
First, we know that we only want to keep participants who had at least an 80% success rate in the critical experimental manipulation task. Let’s see how many participants have less than an 80% success rate.
data %>%
filter(manipsuccessleft < .80) %>%
count()

| n |
|---|
| 17 |
There are 17 such participants; let’s exclude them.
data <- data %>%
filter(manipsuccessleft >= .80)
cat(report_participants(data))

255 participants ()
In this section, we will: (a) test assumptions of normality, (b) transform variables violating assumptions, (c) test assumptions of homoscedasticity, (d) identify and winsorize outliers, and (e) conduct the t-tests.
lapply(col.list, function(x)
nice_normality(data,
variable = x,
title = x,
group = "condition",
shapiro = TRUE,
histogram = TRUE))
Several variables are clearly skewed. Let’s apply transformations. But first, let’s deal with the working memory task, the SOPT (Self-Ordered Pointing Task), which is clearly problematic.
Let’s plot a proper histogram to help diagnose the problem with the SOPT.
hist(data$SOPT)

That looks odd; there are some obvious outliers here. These participants probably did not do the task correctly, especially given the gap after 60 errors. Let’s see how many people made more than 60 errors.
data %>%
filter(SOPT < -60) %>%
count()

| n |
|---|
| 10 |
There are 10 people with more than 60 errors. Let’s exclude them.
data <- data %>%
filter(SOPT >= -60)
cat(report_participants(data))

245 participants ()
Normally, the SOPT raw scores represent the number of errors, but we had initially multiplied them by -1 so that a smaller score would mean lower working memory capacity. Here we reverse this again to be able to use the various transformations. We also add a constant of 1 to avoid scores of zero, which can interfere with the transformation (and so that the Box-Cox transformation can be considered).
data <- data %>%
mutate(SOPT = SOPT * -1 + 1)

The function below transforms variables according to the best possible transformation (via the bestNormalize package), and also standardizes them.
predict_bestNormalize <- function(var, print.transform = TRUE, standardize = TRUE) {
x <- bestNormalize(var, standardize = standardize)
if (print.transform == TRUE) {
print(cur_column())
print(x$chosen_transform)
cat("\n")
}
predict(x)
}
set.seed(100)
data <- data %>%
mutate(across(all_of(col.list),
predict_bestNormalize,
.names = "{.col}.BN"))

## [1] "blastintensity"
## orderNorm Transformation with 245 nonmissing obs and ties
## - 142 unique values
## - Original quantiles:
## 0% 25% 50% 75% 100%
## 0.00 3.04 4.76 6.04 10.00
##
## [1] "blastduration"
## orderNorm Transformation with 245 nonmissing obs and ties
## - 201 unique values
## - Original quantiles:
## 0% 25% 50% 75% 100%
## 0.0 792.8 1100.4 1299.6 2000.0
##
## [1] "blastintensity.duration"
## Standardized sqrt(x + a) Transformation with 245 nonmissing obs.:
## Relevant statistics:
## - a = 0
## - mean (before standardization) = 68.97142
## - sd (before standardization) = 32.98318
##
## [1] "blastintensity.first"
## orderNorm Transformation with 245 nonmissing obs and ties
## - 11 unique values
## - Original quantiles:
## 0% 25% 50% 75% 100%
## 0 3 7 9 10
##
## [1] "blastduration.first"
## center_scale(x) Transformation with 245 nonmissing obs.
## Estimated statistics:
## - mean (before standardization) = 1149.265
## - sd (before standardization) = 632.9021
##
## [1] "blastintensity.duration.first"
## orderNorm Transformation with 245 nonmissing obs and ties
## - 59 unique values
## - Original quantiles:
## 0% 25% 50% 75% 100%
## 0 2000 7000 13300 20000
##
## [1] "KIMS"
## Standardized asinh(x) Transformation with 245 nonmissing obs.:
## Relevant statistics:
## - mean (before standardization) = 1.935386
## - sd (before standardization) = 0.1339016
##
## [1] "BSCS"
## Standardized asinh(x) Transformation with 245 nonmissing obs.:
## Relevant statistics:
## - mean (before standardization) = 1.9655
## - sd (before standardization) = 0.2242601
##
## [1] "BAQ"
## Standardized asinh(x) Transformation with 245 nonmissing obs.:
## Relevant statistics:
## - mean (before standardization) = 1.799905
## - sd (before standardization) = 0.2964333
##
## [1] "SOPT"
## Standardized asinh(x) Transformation with 245 nonmissing obs.:
## Relevant statistics:
## - mean (before standardization) = 3.176059
## - sd (before standardization) = 0.6035038
##
## [1] "IAT"
## center_scale(x) Transformation with 245 nonmissing obs.
## Estimated statistics:
## - mean (before standardization) = -0.5083574
## - sd (before standardization) = 0.3708143
col.list <- paste0(col.list, ".BN")

Let’s check if normality was corrected.
# Group normality
lapply(col.list, function(x)
nice_normality(data,
x,
"condition",
shapiro = TRUE,
title = x,
histogram = TRUE))
This looks rather reasonable now, though not perfect (fortunately, t-tests are quite robust to violations of normality). However, the BSCS variable was actually made worse by the transformation, so let’s use the untransformed (but standardized) variable instead.
data$BSCS.BN <- scale(data$BSCS)

We will also standardize the variables based on the MAD.
# Standardize and center main continuous IV variable (based on MAD)
# data <- data %>%
# mutate(across(all_of(col.list),
# ~scale_mad(.x),
# .names = "{col}"))

We can now resume with the next step: checking variance.
# Plotting variance
plots(lapply(col.list, function(x) {
nice_varplot(data, x, group = "condition")
}),
n_columns = 3)

Variance looks good. No group has four times the variance of any other group. We can now resume with checking outliers.
# Using boxplots
plots(lapply(col.list, function(x) {
ggplot(data, aes(condition, !!sym(x))) +
geom_boxplot()
}),
n_columns = 3)

There are some outliers, but nothing unreasonable. Let’s still check with the 3 median absolute deviations (MAD) method.
find_mad(data, col.list, criteria = 3)

## 5 outlier(s) based on 3 median absolute deviations for variable(s):
## blastintensity.BN, blastduration.BN, blastintensity.duration.BN, blastintensity.first.BN, blastduration.first.BN, blastintensity.duration.first.BN, KIMS.BN, BSCS.BN, BAQ.BN, SOPT.BN, IAT.BN,
##
## Outliers per variable:
## $SOPT.BN
## Row SOPT.BN
## 1 61 -3.802272
## 2 84 -2.870609
## 3 90 -2.870609
## 4 218 -3.802272
##
## $IAT.BN
## Row IAT.BN
## 1 54 2.655326
There are 5 outliers after our transformations (4 on the SOPT and 1 on the IAT).
Visual assessment and the MAD method confirm that we have some outlier values. We could ignore them, but because they could have a disproportionate influence on the models, one recommendation is to winsorize them by bringing the values back to 3 deviations from the center. Instead of using the standard deviation around the mean, however, we use the absolute deviation around the median, as it is more robust to extreme observations. For a discussion, see:
Leys, C., Klein, O., Bernard, P., & Licata, L. (2013). Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), 764–766. https://doi.org/10.1016/j.jesp.2013.03.013
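To make the procedure concrete, here is a minimal sketch of what MAD-based winsorization does (a hypothetical illustration; the analysis itself uses rempsyc’s winsorize_mad): values beyond ±3 MAD of the median are clamped back to that boundary, while all other values are left untouched.

```r
# Hypothetical sketch of MAD winsorization: clamp values farther than
# criteria * MAD from the median back to the boundary
winsorize_mad_sketch <- function(x, criteria = 3) {
  med <- median(x, na.rm = TRUE)
  bound <- criteria * mad(x, na.rm = TRUE)
  pmin(pmax(x, med - bound), med + bound)  # clamp both tails
}

x <- c(1:10, 100)        # one extreme value
winsorize_mad_sketch(x)  # the 100 is brought back within 3 MADs of the median
```

Unlike exclusion, this keeps the observation in the sample (and its rank among the extremes) while limiting its leverage on the models.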
# Winsorize variables of interest with MAD
data <- data %>%
mutate(across(all_of(col.list),
winsorize_mad,
.names = "{.col}.w"))
# Update col.list
col.list <- paste0(col.list, ".w")

Outliers are still present but were brought back within reasonable limits, where applicable. We are now ready to compare the group condition (Control vs. Mindfulness Priming) across our different variables with the t-tests.
nice_t_test(data,
response = col.list,
group = "condition") %>%
nice_table(highlight = 0.10)

## Using Welch t-test (base R's default; cf. https://doi.org/10.5334/irsp.82).
## For the Student t-test, use `var.equal = TRUE`.
##
##
Dependent Variable | t | df | p | d | 95% CI |
blastintensity.BN.w | -1.69 | 235.03 | .092 | -0.22 | [-0.47, 0.03] |
blastduration.BN.w | -0.69 | 238.85 | .491 | -0.09 | [-0.34, 0.16] |
blastintensity.duration.BN.w | -1.35 | 235.28 | .177 | -0.17 | [-0.42, 0.08] |
blastintensity.first.BN.w | 0.15 | 239.90 | .882 | 0.02 | [-0.23, 0.27] |
blastduration.first.BN.w | 0.25 | 241.71 | .800 | 0.03 | [-0.22, 0.28] |
blastintensity.duration.first.BN.w | 0.14 | 240.64 | .893 | 0.02 | [-0.23, 0.27] |
KIMS.BN.w | -0.87 | 238.36 | .383 | -0.11 | [-0.36, 0.14] |
BSCS.BN.w | -0.56 | 241.74 | .578 | -0.07 | [-0.32, 0.18] |
BAQ.BN.w | 1.34 | 241.81 | .182 | 0.17 | [-0.08, 0.42] |
SOPT.BN.w | 0.51 | 237.40 | .609 | 0.07 | [-0.19, 0.32] |
IAT.BN.w | 1.33 | 242.70 | .184 | 0.17 | [-0.08, 0.42] |
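As the console note above indicates, base R’s t.test() defaults to the Welch test, which does not assume equal variances. A toy example (with simulated data, not the study data) shows how the two versions are requested and how they differ in degrees of freedom:

```r
set.seed(42)
toy <- data.frame(
  y = c(rnorm(50, mean = 0), rnorm(50, mean = 0.3)),
  group = rep(c("Control", "Mindfulness"), each = 50)
)

t.test(y ~ group, data = toy)                    # Welch (default): fractional df
t.test(y ~ group, data = toy, var.equal = TRUE)  # Student: df = n1 + n2 - 2 = 98
```

The Welch test is generally recommended as the default because it remains valid under unequal group variances at little cost when variances are in fact equal.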
Interpretation: There is no clear group effect of our experimental condition on our different variables. However, there is a marginal effect of condition on blast intensity, with the mindfulness group showing slightly higher blast intensity than the control group. Let’s visualize this effect.
nice_violin(data,
group = "condition",
response = "blastintensity.BN.w",
comp1 = 1,
comp2 = 2,
obs = TRUE)

nice_violin(data,
group = "condition",
response = "blastduration.BN.w",
comp1 = 1,
comp2 = 2,
obs = TRUE)

nice_violin(data,
group = "condition",
response = "blastintensity.duration.BN.w",
comp1 = 1,
comp2 = 2,
obs = TRUE)

nice_violin(data,
group = "condition",
response = "blastintensity.first.BN.w",
comp1 = 1,
comp2 = 2,
obs = TRUE)

Let’s extract the means and standard deviations for journal reporting.
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity),
SD = sd(blastintensity),
N = n()) %>%
nice_table(width = 0.40)

condition | M | SD | N |
Control | 4.38 | 2.32 | 128 |
Mindfulness | 4.93 | 2.57 | 117 |
data %>%
group_by(condition) %>%
summarize(M = mean(blastduration),
SD = sd(blastduration),
N = n()) %>%
nice_table(width = 0.40)

condition | M | SD | N |
Control | 1,019.01 | 440.26 | 128 |
Mindfulness | 1,061.38 | 471.63 | 117 |
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity.duration),
SD = sd(blastintensity.duration),
N = n()) %>%
nice_table(width = 0.40)

condition | M | SD | N |
Control | 5,368.70 | 4,245.46 | 128 |
Mindfulness | 6,356.67 | 5,041.47 | 117 |
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity.first),
SD = sd(blastintensity.first),
N = n()) %>%
nice_table(width = 0.40)

condition | M | SD | N |
Control | 5.89 | 3.45 | 128 |
Mindfulness | 5.91 | 3.48 | 117 |
Let’s now test whether our variables interact with our experimental condition. But first, let’s check the model assumptions.
big.mod1 <- lm(blastintensity.BN.w ~ condition_dum*KIMS.BN.w +
condition_dum*BSCS.BN.w + condition_dum*BAQ.BN.w +
condition_dum*SOPT.BN.w + condition_dum*IAT.BN.w,
data = data, na.action="na.exclude")
check_model(big.mod1)

big.mod2 <- lm(blastduration.BN.w ~ condition_dum*KIMS.BN.w +
condition_dum*BSCS.BN.w + condition_dum*BAQ.BN.w +
condition_dum*SOPT.BN.w + condition_dum*IAT.BN.w,
data = data, na.action="na.exclude")
check_model(big.mod2)

big.mod3 <- lm(blastintensity.duration.BN.w ~ condition_dum*KIMS.BN.w +
condition_dum*BSCS.BN.w + condition_dum*BAQ.BN.w +
condition_dum*SOPT.BN.w + condition_dum*IAT.BN.w,
data = data, na.action="na.exclude")
check_model(big.mod3)

big.mod4 <- lm(blastintensity.first.BN.w ~ condition_dum*KIMS.BN.w +
condition_dum*BSCS.BN.w + condition_dum*BAQ.BN.w +
condition_dum*SOPT.BN.w + condition_dum*IAT.BN.w,
data = data, na.action="na.exclude")
check_model(big.mod4)

big.mod5 <- lm(blastduration.first.BN.w ~ condition_dum*KIMS.BN.w +
condition_dum*BSCS.BN.w + condition_dum*BAQ.BN.w +
condition_dum*SOPT.BN.w + condition_dum*IAT.BN.w,
data = data, na.action="na.exclude")
check_model(big.mod5)

big.mod6 <- lm(blastintensity.duration.first.BN.w ~ condition_dum*KIMS.BN.w +
condition_dum*BSCS.BN.w + condition_dum*BAQ.BN.w +
condition_dum*SOPT.BN.w + condition_dum*IAT.BN.w,
data = data, na.action="na.exclude")
check_model(big.mod6)

Overall, the model assumptions look quite good, even with all these variables. The lines for linearity and homoscedasticity are slightly off, but nothing extreme. Let’s now look at the results.
big.mod1 %>%
nice_lm() %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor | df | b | t | p | sr2 |
blastintensity.BN.w | condition_dum | 233 | 0.25 | 2.15 | .033 | .02 |
blastintensity.BN.w | KIMS.BN.w | 233 | 0.14 | 1.39 | .164 | .01 |
blastintensity.BN.w | BSCS.BN.w | 233 | -0.14 | -1.55 | .123 | .01 |
blastintensity.BN.w | BAQ.BN.w | 233 | 0.09 | 0.95 | .344 | .00 |
blastintensity.BN.w | SOPT.BN.w | 233 | 0.25 | 2.65 | .009 | .02 |
blastintensity.BN.w | IAT.BN.w | 233 | 0.16 | 1.92 | .056 | .01 |
blastintensity.BN.w | condition_dum:KIMS.BN.w | 233 | -0.19 | -1.38 | .168 | .01 |
blastintensity.BN.w | condition_dum:BSCS.BN.w | 233 | 0.48 | 3.27 | .001 | .04 |
blastintensity.BN.w | condition_dum:BAQ.BN.w | 233 | 0.04 | 0.33 | .744 | .00 |
blastintensity.BN.w | condition_dum:SOPT.BN.w | 233 | 0.10 | 0.81 | .421 | .00 |
blastintensity.BN.w | condition_dum:IAT.BN.w | 233 | -0.15 | -1.22 | .224 | .01 |
big.mod2 %>%
nice_lm() %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor | df | b | t | p | sr2 |
blastduration.BN.w | condition_dum | 233 | 0.13 | 1.16 | .247 | .00 |
blastduration.BN.w | KIMS.BN.w | 233 | 0.10 | 1.01 | .312 | .00 |
blastduration.BN.w | BSCS.BN.w | 233 | -0.12 | -1.27 | .206 | .01 |
blastduration.BN.w | BAQ.BN.w | 233 | 0.04 | 0.43 | .666 | .00 |
blastduration.BN.w | SOPT.BN.w | 233 | 0.31 | 3.30 | .001 | .04 |
blastduration.BN.w | IAT.BN.w | 233 | 0.22 | 2.56 | .011 | .02 |
blastduration.BN.w | condition_dum:KIMS.BN.w | 233 | -0.18 | -1.33 | .186 | .01 |
blastduration.BN.w | condition_dum:BSCS.BN.w | 233 | 0.50 | 3.49 | .001 | .04 |
blastduration.BN.w | condition_dum:BAQ.BN.w | 233 | 0.09 | 0.68 | .497 | .00 |
blastduration.BN.w | condition_dum:SOPT.BN.w | 233 | 0.07 | 0.54 | .590 | .00 |
blastduration.BN.w | condition_dum:IAT.BN.w | 233 | -0.16 | -1.28 | .203 | .01 |
big.mod3 %>%
nice_lm() %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor | df | b | t | p | sr2 |
blastintensity.duration.BN.w | condition_dum | 233 | 0.22 | 1.88 | .062 | .01 |
blastintensity.duration.BN.w | KIMS.BN.w | 233 | 0.13 | 1.34 | .180 | .01 |
blastintensity.duration.BN.w | BSCS.BN.w | 233 | -0.14 | -1.50 | .136 | .01 |
blastintensity.duration.BN.w | BAQ.BN.w | 233 | 0.08 | 0.85 | .396 | .00 |
blastintensity.duration.BN.w | SOPT.BN.w | 233 | 0.29 | 3.06 | .002 | .03 |
blastintensity.duration.BN.w | IAT.BN.w | 233 | 0.19 | 2.26 | .024 | .02 |
blastintensity.duration.BN.w | condition_dum:KIMS.BN.w | 233 | -0.22 | -1.58 | .115 | .01 |
blastintensity.duration.BN.w | condition_dum:BSCS.BN.w | 233 | 0.54 | 3.65 | < .001 | .05 |
blastintensity.duration.BN.w | condition_dum:BAQ.BN.w | 233 | 0.08 | 0.58 | .566 | .00 |
blastintensity.duration.BN.w | condition_dum:SOPT.BN.w | 233 | 0.08 | 0.66 | .512 | .00 |
blastintensity.duration.BN.w | condition_dum:IAT.BN.w | 233 | -0.16 | -1.29 | .198 | .01 |
big.mod4 %>%
nice_lm() %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor | df | b | t | p | sr2 |
blastintensity.first.BN.w | condition_dum | 233 | 0.01 | 0.10 | .918 | .00 |
blastintensity.first.BN.w | KIMS.BN.w | 233 | -0.07 | -0.76 | .450 | .00 |
blastintensity.first.BN.w | BSCS.BN.w | 233 | -0.06 | -0.73 | .465 | .00 |
blastintensity.first.BN.w | BAQ.BN.w | 233 | 0.09 | 0.96 | .337 | .00 |
blastintensity.first.BN.w | SOPT.BN.w | 233 | 0.14 | 1.57 | .118 | .01 |
blastintensity.first.BN.w | IAT.BN.w | 233 | 0.13 | 1.54 | .124 | .01 |
blastintensity.first.BN.w | condition_dum:KIMS.BN.w | 233 | 0.04 | 0.32 | .750 | .00 |
blastintensity.first.BN.w | condition_dum:BSCS.BN.w | 233 | 0.36 | 2.58 | .010 | .03 |
blastintensity.first.BN.w | condition_dum:BAQ.BN.w | 233 | -0.08 | -0.62 | .533 | .00 |
blastintensity.first.BN.w | condition_dum:SOPT.BN.w | 233 | 0.08 | 0.68 | .500 | .00 |
blastintensity.first.BN.w | condition_dum:IAT.BN.w | 233 | -0.08 | -0.72 | .472 | .00 |
big.mod5 %>%
nice_lm() %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor | df | b | t | p | sr2 |
blastduration.first.BN.w | condition_dum | 233 | 0.00 | 0.00 | .996 | .00 |
blastduration.first.BN.w | KIMS.BN.w | 233 | -0.08 | -0.76 | .449 | .00 |
blastduration.first.BN.w | BSCS.BN.w | 233 | -0.02 | -0.23 | .816 | .00 |
blastduration.first.BN.w | BAQ.BN.w | 233 | 0.01 | 0.12 | .905 | .00 |
blastduration.first.BN.w | SOPT.BN.w | 233 | 0.16 | 1.59 | .113 | .01 |
blastduration.first.BN.w | IAT.BN.w | 233 | 0.13 | 1.39 | .167 | .01 |
blastduration.first.BN.w | condition_dum:KIMS.BN.w | 233 | -0.07 | -0.49 | .622 | .00 |
blastduration.first.BN.w | condition_dum:BSCS.BN.w | 233 | 0.42 | 2.70 | .007 | .03 |
blastduration.first.BN.w | condition_dum:BAQ.BN.w | 233 | 0.13 | 0.93 | .354 | .00 |
blastduration.first.BN.w | condition_dum:SOPT.BN.w | 233 | 0.08 | 0.60 | .548 | .00 |
blastduration.first.BN.w | condition_dum:IAT.BN.w | 233 | -0.15 | -1.14 | .256 | .01 |
big.mod6 %>%
nice_lm() %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor | df | b | t | p | sr2 |
blastintensity.duration.first.BN.w | condition_dum | 233 | 0.02 | 0.14 | .893 | .00 |
blastintensity.duration.first.BN.w | KIMS.BN.w | 233 | -0.08 | -0.82 | .414 | .00 |
blastintensity.duration.first.BN.w | BSCS.BN.w | 233 | -0.04 | -0.45 | .651 | .00 |
blastintensity.duration.first.BN.w | BAQ.BN.w | 233 | 0.07 | 0.70 | .486 | .00 |
blastintensity.duration.first.BN.w | SOPT.BN.w | 233 | 0.17 | 1.82 | .070 | .01 |
blastintensity.duration.first.BN.w | IAT.BN.w | 233 | 0.11 | 1.30 | .194 | .01 |
blastintensity.duration.first.BN.w | condition_dum:KIMS.BN.w | 233 | -0.02 | -0.14 | .886 | .00 |
blastintensity.duration.first.BN.w | condition_dum:BSCS.BN.w | 233 | 0.41 | 2.91 | .004 | .03 |
blastintensity.duration.first.BN.w | condition_dum:BAQ.BN.w | 233 | 0.03 | 0.22 | .825 | .00 |
blastintensity.duration.first.BN.w | condition_dum:SOPT.BN.w | 233 | 0.05 | 0.37 | .714 | .00 |
blastintensity.duration.first.BN.w | condition_dum:IAT.BN.w | 233 | -0.11 | -0.90 | .371 | .00 |
Interpretation: The condition by trait self-control (Brief Self-Control Scale, BSCS) interaction is significant for all dependent variables, suggesting it is somewhat reliable.
Let’s plot the main significant interaction(s).
interact_plot(big.mod1, pred = "condition_dum", modx = "BSCS.BN.w",
modxvals = NULL, interval = TRUE, x.label = "Condition",
pred.labels = c("Control", "Mindfulness"),
legend.main = "Trait Self-Control")

interact_plot(big.mod2, pred = "condition_dum", modx = "BSCS.BN.w",
modxvals = NULL, interval = TRUE, x.label = "Condition",
pred.labels = c("Control", "Mindfulness"),
legend.main = "Trait Self-Control")

interact_plot(big.mod3, pred = "condition_dum", modx = "BSCS.BN.w",
modxvals = NULL, interval = TRUE, x.label = "Condition",
pred.labels = c("Control", "Mindfulness"),
legend.main = "Trait Self-Control")

interact_plot(big.mod4, pred = "condition_dum", modx = "BSCS.BN.w",
modxvals = NULL, interval = TRUE, x.label = "Condition",
pred.labels = c("Control", "Mindfulness"),
legend.main = "Trait Self-Control")

Interpretation: The interaction is pretty much the same for all models. Counterintuitively, for people with low self-control, the priming mindfulness condition relates to lower aggression relative to the control condition. In contrast, for people with high self-control, the priming mindfulness condition relates to higher aggression.
Let’s look at the simple slopes now (only for the significant interaction).
big.mod1 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.BN.w") %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor (+/-1 SD) | df | b | t | p | sr2 |
blastintensity.BN.w | condition_dum (LOW-BSCS.BN.w) | 233 | -0.23 | -1.20 | .232 | .01 |
blastintensity.BN.w | condition_dum (MEAN-BSCS.BN.w) | 233 | 0.25 | 2.15 | .033 | .02 |
blastintensity.BN.w | condition_dum (HIGH-BSCS.BN.w) | 233 | 0.73 | 3.90 | < .001 | .05 |
big.mod2 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.BN.w") %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor (+/-1 SD) | df | b | t | p | sr2 |
blastduration.BN.w | condition_dum (LOW-BSCS.BN.w) | 233 | -0.37 | -1.99 | .048 | .01 |
blastduration.BN.w | condition_dum (MEAN-BSCS.BN.w) | 233 | 0.13 | 1.16 | .247 | .00 |
blastduration.BN.w | condition_dum (HIGH-BSCS.BN.w) | 233 | 0.64 | 3.45 | .001 | .04 |
big.mod3 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.BN.w") %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor (+/-1 SD) | df | b | t | p | sr2 |
blastintensity.duration.BN.w | condition_dum (LOW-BSCS.BN.w) | 233 | -0.31 | -1.66 | .097 | .01 |
blastintensity.duration.BN.w | condition_dum (MEAN-BSCS.BN.w) | 233 | 0.22 | 1.88 | .062 | .01 |
blastintensity.duration.BN.w | condition_dum (HIGH-BSCS.BN.w) | 233 | 0.76 | 4.02 | < .001 | .06 |
big.mod4 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.BN.w") %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor (+/-1 SD) | df | b | t | p | sr2 |
blastintensity.first.BN.w | condition_dum (LOW-BSCS.BN.w) | 233 | -0.35 | -1.95 | .053 | .01 |
blastintensity.first.BN.w | condition_dum (MEAN-BSCS.BN.w) | 233 | 0.01 | 0.10 | .918 | .00 |
blastintensity.first.BN.w | condition_dum (HIGH-BSCS.BN.w) | 233 | 0.37 | 2.08 | .039 | .02 |
big.mod5 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.BN.w") %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor (+/-1 SD) | df | b | t | p | sr2 |
blastduration.first.BN.w | condition_dum (LOW-BSCS.BN.w) | 233 | -0.42 | -2.10 | .037 | .02 |
blastduration.first.BN.w | condition_dum (MEAN-BSCS.BN.w) | 233 | 0.00 | 0.00 | .996 | .00 |
blastduration.first.BN.w | condition_dum (HIGH-BSCS.BN.w) | 233 | 0.42 | 2.11 | .036 | .02 |
big.mod6 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.BN.w") %>%
nice_table(highlight = TRUE)

Dependent Variable | Predictor (+/-1 SD) | df | b | t | p | sr2 |
blastintensity.duration.first.BN.w | condition_dum (LOW-BSCS.BN.w) | 233 | -0.40 | -2.18 | .031 | .02 |
blastintensity.duration.first.BN.w | condition_dum (MEAN-BSCS.BN.w) | 233 | 0.02 | 0.14 | .893 | .00 |
blastintensity.duration.first.BN.w | condition_dum (HIGH-BSCS.BN.w) | 233 | 0.43 | 2.35 | .020 | .02 |
Interpretation: The effect of priming mindfulness on blast intensity is only significant for people with high self-control.
Based on the results, it seems that the interaction of interest comes up for all six measures of blast aggression (intensity, duration, the combination of the two, and the first blast of each type), suggesting it is reliable.
The effect sizes are slightly smaller for the first-blast measures (sr2 = .03) than for average intensity or duration (sr2 = .04), or intensity * duration (sr2 = .05).
Therefore, based on the marginally larger effect size, perhaps it does make sense to use the intensity * duration combination in future studies. My intuition is also that the effect is more reliable for reactive aggression (all trials) than proactive aggression (first measure only). Another reason to avoid using only the first trials is that they lead to problems with the distributions (i.e., they are not normal and are difficult to transform to normality).
report::cite_packages(sessionInfo())